control ai
Exclusive: The British Public Wants Stricter AI Rules Than Its Government Does
Even as Silicon Valley races to build more powerful artificial intelligence models, public opinion on the other side of the Atlantic remains decidedly skeptical of the influence of tech CEOs when it comes to regulating the sector, with the vast majority of Britons worried about the safety of new AI systems. The concerns, highlighted in a new poll shared exclusively with TIME, come as world leaders and tech bosses, including U.S. Vice President JD Vance, France's Emmanuel Macron, India's Narendra Modi, OpenAI chief Sam Altman, and Google's Sundar Pichai, prepare to gather in Paris next week to discuss the rapid pace of developments in AI. The new poll shows that 87% of Brits would back a law requiring AI developers to prove their systems are safe before release, with 60% in favor of outlawing the development of "smarter-than-human" AI models. Just 9%, meanwhile, said they trust tech CEOs to act in the public interest when discussing AI regulation. The survey was conducted by the British pollster YouGov on behalf of Control AI, a non-profit focused on AI risks.
- Europe > France (0.56)
- Asia > India (0.56)
- Europe > United Kingdom (0.51)
- North America > United States > California (0.25)
'The outcome could be extinction': Elon Musk-backed researcher warns there is NO proof AI can be controlled - and says tech should be shelved NOW
A researcher backed by Elon Musk is sounding the alarm again about AI's threat to humanity after finding no proof the technology can be controlled. Dr Roman V Yampolskiy, an AI safety expert, has received funding from the billionaire to study advanced intelligent systems, which are the focus of his upcoming book 'AI: Unexplainable, Unpredictable, Uncontrollable'. The book examines how AI has the potential to dramatically reshape society, not always to our advantage, and has the 'potential to cause an existential catastrophe'. Yampolskiy, who is a professor at the University of Louisville, conducted an 'examination of the scientific literature on AI' and concluded there is no proof that the technology could be stopped from going rogue. To fully control AI, he suggested, it would need to be modifiable with 'undo' options, limitable, transparent, and easy to understand in human language.
Ministers not doing enough to control AI, says UK professor
One of the professors at the forefront of artificial intelligence has said ministers are not doing enough to protect against the dangers of super-intelligent machines in the future. In the latest contribution to the debate about the safety of the ever-quickening development of AI, Prof Stuart Russell told the Times that the government was reluctant to regulate the industry despite concerns that the technology could get out of control and threaten the future of humanity. Russell, a professor at the University of California, Berkeley, and a former adviser to the US and UK governments, told the Times he was concerned that ChatGPT, which was released in November, could become part of a super-intelligent machine that could not be constrained. "How do you maintain power over entities more powerful than you – for ever?" he asked. "If you don't have an answer, then stop doing the research. The stakes couldn't be higher: if we don't control our own civilisation, we have no say in whether we continue to exist." Since the release of ChatGPT to the public last year, which has been used to write prose and has already worried lecturers and teachers over its use in universities and schools, the debate over the technology's long-term safety has intensified. Elon Musk, the Tesla founder and Twitter owner, and the Apple co-founder Steve Wozniak, along with 1,000 AI experts, wrote a letter warning that there was an "out-of-control race" going on at AI labs and calling for a pause on the creation of giant-scale AI. The letter warned the labs were developing "ever more powerful digital minds that no one, not even their creators, can understand, predict or reliably control". There is also concern about AI's wider application. A House of Lords committee this week heard evidence from Sir Lawrence Freedman, a war studies professor, who spoke about concerns over how AI might be used in future wars. Google's rival to ChatGPT, Bard, is due to be released in the EU later this year. Russell himself previously worked for the UN on how to monitor the nuclear test-ban treaty, and was asked to work with Whitehall earlier this year. He said: "The Foreign Office … talked to a lot of people and they concluded that loss of control was a plausible and extremely high-significance outcome. And then the government came out with a regulatory approach that says: 'Nothing to see here … we'll welcome the AI industry as if we were talking about making cars or something like that.'"
- Europe > United Kingdom (0.57)
- North America > United States > California (0.26)
- Government > Military (1.00)
- Government > Regional Government > Europe Government > United Kingdom Government (0.57)
Welcome to the Underground!
Predictions and trends for AI obfuscation and containment. The potential impact of AI obfuscation on AI development and society. Conclusion: As AI systems continue to grow and evolve, understanding AI obfuscation becomes increasingly crucial. This book has provided an accessible and straightforward guide to the concepts of AI obfuscation, containment, and the use of metaphors to explain these ideas. By learning about these topics, readers can engage in informed discussions about the development and ethical implications of AI obfuscation and its role in shaping the future of artificial intelligence.
Using AI to control AI: How to Prevent Creating Biased Datasets
The team responsible for the project scavenged the internet for these images, each one associated with a set of keywords. However, not all queries (automated Google searches) were successful: after about 100 results, the relevance of the images diminished. Depending on the source, the team therefore had to set a threshold on the number of images collected per query. After storing the images and analyzing the keywords associated with each one, the team clustered them by common factors and then chose labels using a statistical methodology.
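The article does not specify the team's tooling, but the pipeline it describes is straightforward to sketch. Below is a minimal Python illustration, assuming a hypothetical `search_images` function that returns `(url, keywords)` pairs for a query; the per-query cap, keyword-overlap clustering, and most-frequent-keyword labeling are stand-ins for the unspecified threshold, clustering, and statistical labeling steps.

```python
from collections import Counter

MAX_RESULTS = 100  # relevance reportedly diminished past roughly 100 results per query

def collect(queries, search_images, cap=MAX_RESULTS):
    """Gather (url, keywords) pairs for each query, truncated at the relevance cap."""
    images = []
    for query in queries:
        images.extend(search_images(query)[:cap])
    return images

def cluster_by_keywords(images, min_overlap=2):
    """Greedy clustering: an image joins the first cluster sharing enough keywords."""
    clusters = []  # each cluster: {"keywords": Counter, "members": [url, ...]}
    for url, keywords in images:
        keyword_set = set(keywords)
        for cluster in clusters:
            if len(keyword_set & set(cluster["keywords"])) >= min_overlap:
                cluster["keywords"].update(keyword_set)
                cluster["members"].append(url)
                break
        else:  # no existing cluster matched; start a new one
            clusters.append({"keywords": Counter(keyword_set), "members": [url]})
    return clusters

def label(cluster):
    """Stand-in for the statistical labeling step: the cluster's most frequent keyword."""
    return cluster["keywords"].most_common(1)[0][0]
```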
How to Control AI that Becomes Too Advanced?
Artificial intelligence is rapidly becoming more advanced. One of the organisations working on AI is OpenAI, the not-for-profit artificial intelligence research organisation co-founded by Elon Musk. Last week, they produced a paper demonstrating the progress they have made on predictive text software. The AI they developed, called GPT-2, is so effective at writing text based on just a few lines of input that OpenAI decided not to release the full research to the public. GPT-2 has already been described as the text version of deepfakes.
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.49)
Frameworks Seek to Control AI
AI governance frameworks are emerging as guardrails for controlling algorithms that play a growing role in human decision-making. Among the goals is managing the consequences of those decisions. Business consultants and professional services firms in particular have focused on new ways to assess and control AI algorithms as a way of building trust. Among them is KPMG, which this week launched a new framework called AI in Control, designed to assess the algorithms underlying business applications, spot bias, and enforce governance rules to ensure ethical AI. The goal of KPMG's framework is to foster AI algorithms that are accurate, addressing what the company warns is a current "trust gap" among business executives clamoring for "explainable AI."
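The article does not detail how AI in Control assesses bias. As a rough illustration of the kind of check such a framework might run (the metric choice and function name here are assumptions, not KPMG's method), a simple audit could compare positive-prediction rates across demographic groups:

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rates between any two groups.

    predictions: iterable of 0/1 model outputs
    groups: iterable of group labels, aligned with predictions
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Example: a gap near 0 suggests parity; a large gap flags the model for review.
gap, rates = demographic_parity_gap([1, 0, 1, 1, 0, 0], ["a", "a", "a", "b", "b", "b"])
print(f"gap={gap:.2f}, rates={rates}")
```

Real governance frameworks typically combine several such metrics with documentation and human review; a single parity number is only a first-pass signal.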
To control AI, we need to understand more about humans
From Frankenstein to I, Robot, we have for centuries been intrigued by, and terrified of, creating beings that might develop autonomy and free will. And now that we stand on the cusp of the age of ever-more-powerful artificial intelligence, the urgency of developing ways to ensure our creations always do what we want them to do is growing. For some in AI, like Mark Zuckerberg, AI is just getting better all the time, and if problems come up, technology will solve them. But for others, like Elon Musk, the time to start figuring out how to regulate powerful machine-learning-based systems is now. Not because I think the doomsday scenario that Hollywood loves to scare us with is around the corner, but because Zuckerberg's confidence that we can solve any future problems is contingent on Musk's insistence that we need to "learn as much as possible" now.
Elon Musk spends $10 million on new projects to control AI
He may be a driving force behind artificial intelligence, but Elon Musk is also one of its most outspoken critics. The Tesla founder has previously compared AI research to 'summoning the devil' and claims that intelligent robots could someday spell the end for humanity. Now the billionaire has revealed he is funding 37 research projects to make sure humans can control future robotic systems.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.16)
- North America > United States > Massachusetts (0.06)
- North America > United States > California > Alameda County > Berkeley (0.05)
Good and Bad Beyond the Control of Researchers: Who Controls AI?
Artificial intelligence technology development, application, and production are growing rapidly. As we begin to understand the scope of the change that lies ahead, two simple questions come to mind. First, is it even possible to ensure that the technology is developed solely for the benefit of humankind and not to cause harm? Second, if such an ideal is achieved, who controls and monitors it? The first assumption we have to make – and frankly it is not a big leap – is that artificial intelligence technology, like other technologies, can be used to do both good and bad.